Human Algorithmic Stability and Human Rademacher Complexity

Authors

  • Mehrnoosh Vahdat
  • Luca Oneto
  • Alessandro Ghio
  • Davide Anguita
  • Mathias Funk
  • Matthias Rauterberg
Abstract

In Machine Learning (ML), the learning process of an algorithm given a set of examples is studied via complexity measures. A previous study paved the way towards using ML complexity measures in the Human Learning (HL) domain by introducing Human Rademacher Complexity (HRC); in this work, we introduce Human Algorithmic Stability (HAS). Exploratory experiments, performed on a group of students, show the superiority of HAS over HRC, since HAS better captures the nature and complexity of the task to be learned.
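For context, algorithmic stability in ML quantifies how much a learning algorithm's output changes when a single training example is perturbed. Below is a minimal sketch of the classical notion of uniform stability from the ML literature, which HAS presumably adapts to human learners; the notation (A, S, \beta_n) is the conventional one and is not taken from this paper. An algorithm A has uniform stability \beta_n if, for every training set S of size n and every index i,

    \sup_{z} \left| \ell(A_S, z) - \ell(A_{S^{\setminus i}}, z) \right| \le \beta_n,

where A_S is the hypothesis learned from S, S^{\setminus i} is S with the i-th example removed, and \ell is a bounded loss. A smaller \beta_n means the learner is less sensitive to any single example, which in turn yields tighter generalization bounds.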

Similar Articles

Can machine learning explain human learning?

Learning Analytics (LA) has a major interest in exploring and understanding the learning process of humans and, for this purpose, benefits from both Cognitive Science, which studies how humans learn, and Machine Learning, which studies how algorithms learn from data. Usually, Machine Learning is exploited as a tool for analyzing data coming from experimental studies, but it has been recently ap...

Applications of Empirical Processes in Learning Theory: Algorithmic Stability and Performance Guarantees

First, we demonstrate how the Contraction Lemma for Rademacher averages can be used to obtain tight performance guarantees for learning methods [3]. In particular, we derive risk bounds for a greedy mixture density estimation procedure. We prove that, unlike what is suggested in the literature, the number of terms in the mixture is not a bias-variance trade-off for the performance. Our upper bo...
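For reference, one standard statement of the Contraction Lemma for Rademacher averages (a sketch in conventional notation, not drawn from this abstract): if \varphi is L-Lipschitz and \sigma_1, \dots, \sigma_n are i.i.d. Rademacher variables, then

    \mathbb{E}_{\sigma}\left[ \sup_{f \in \mathcal{F}} \sum_{i=1}^{n} \sigma_i \, \varphi(f(x_i)) \right] \le L \; \mathbb{E}_{\sigma}\left[ \sup_{f \in \mathcal{F}} \sum_{i=1}^{n} \sigma_i \, f(x_i) \right].

This is what allows a complexity bound on a class of predictors to be transferred through a Lipschitz loss, and it is the kind of structural step the risk bounds above rely on.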

Human Rademacher Complexity

We propose to use Rademacher complexity, originally developed in computational learning theory, as a measure of human learning capacity. Rademacher complexity measures a learner’s ability to fit random labels, and can be used to bound the learner’s true error based on the observed training sample error. We first review the definition of Rademacher complexity and its generalization bound. We the...
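For reference, the standard definitions being summarized (conventional notation; \sigma_1, \dots, \sigma_n are i.i.d. Rademacher variables, uniform on \{-1, +1\}): the empirical Rademacher complexity of a function class \mathcal{F} on a sample x_1, \dots, x_n is

    \hat{R}_n(\mathcal{F}) = \mathbb{E}_{\sigma}\left[ \sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n} \sigma_i \, f(x_i) \right],

i.e. the learner's best average correlation with random \pm 1 labels. A typical generalization bound for a loss bounded in [0, 1] then states that, with probability at least 1 - \delta, for all f \in \mathcal{F},

    L(f) \le \hat{L}(f) + 2\,\hat{R}_n(\mathcal{F}) + 3\sqrt{\frac{\ln(2/\delta)}{2n}},

where L(f) is the true error and \hat{L}(f) the error on the training sample.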

Algorithmic Stability and Regularization Algorithms in an RKHS

In the last few lectures we have seen a number of different generalization error bounds for learning algorithms, using notions such as the growth function and VC dimension; covering numbers, pseudo-dimension, and fat-shattering dimension; margins; and Rademacher averages. While these bounds are different in nature and apply in different contexts, a unifying factor that they all share is that tha...
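To make the setting concrete, regularization algorithms in an RKHS \mathcal{H} typically solve (a sketch in conventional notation, not taken from these notes)

    \min_{f \in \mathcal{H}} \; \frac{1}{n} \sum_{i=1}^{n} \ell(f(x_i), y_i) + \lambda \|f\|_{\mathcal{H}}^2,

and, for a Lipschitz loss and a bounded kernel, such algorithms are uniformly stable with \beta_n = O(1/(\lambda n)), which is what makes stability-based generalization bounds applicable to them.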

Simple Risk Bounds for Position-Sensitive Max-Margin Ranking Algorithms

Risk bounds for position-sensitive max-margin ranking algorithms can be derived straightforwardly from a structural result for Rademacher averages presented by [1]. We apply this result to pairwise and listwise hinge losses that are position-sensitive by virtue of rescaling the margin by a pairwise or listwise position-sensitive prediction loss. Similar bounds have recently been presented for probab...
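As an illustration, one plausible form of the pairwise position-sensitive hinge loss described here (a sketch; \Delta(i, j) denotes a pairwise position-sensitive prediction loss and is not notation taken from this abstract): for a pair in which item i should be ranked above item j,

    \ell(f; x_i, x_j) = \max\left( 0, \; \Delta(i, j) - \big( f(x_i) - f(x_j) \big) \right),

so that pairs whose misordering is more costly, e.g. near the top of the ranking, must be separated by a proportionally larger margin.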

Publication date: 2015